When we pass the Turing test…

Greg Detre

Wednesday, 24 April, 2002

 

I don't think anyone is going to build a machine to pass the Turing test[1] for a considerable time. I see it as the last and archetypal problem of AI: when it falls (and maybe even slightly before), machines will be in a position to improve themselves at a rate that will quickly outstrip previous progress by orders of magnitude (see the article on superhuman intelligence). This is just another way of saying that the Turing test is a sufficient rather than a necessary test of intelligence, i.e. it sets the bar too high. Depending on your perspective, this can be seen as one of its deficiencies, or as a virtue (as Turing himself remarked in Turing (1950)). When we can communicate with a machine in English or some other natural language at a level indistinguishable from conversation with another human (perhaps I should say, with an (extremely) intelligent cognitive scientist), then that machine must understand everything we do about what it is to be human and intelligent, and about its own construction, and so will be in a position to improve upon that construction at least as well as we can. Of course, the machine intelligence will presumably be able to do so at a much faster rate. This holds even if we accept Hofstadter's[2] suggestion that any machine that can count and understands maths as we do may suffer from the same slowness at basic arithmetical calculation (though it will be able to carry around an independent internal calculator more easily), and conceivably from the same fallibilities of boredom, capriciousness, distraction and tiredness, which may (at least to begin with) be unavoidable epiphenomena of intelligence. It seems plausible that even in the distant future we are discussing, it will still be much easier to expand the computational capacity of such a system, or simply to run it faster, than to improve upon the physical implementation of our own brains (see the man-machine minds section below). Moreover, such machines' interface with digitally-stored information, even where it is external to their human-like 'memories', will be faster and more seamless than ours (even if everyone has switched to Dvorak keyboards or speech recognition by then). And of course, we will almost certainly be able to clone such systems more easily than we can educate children to professor level.

 

There are at least two alternative scenarios that I find more plausible, and they are not independent. Either of these linked scenarios might shift the focus away from the Turing test:

a) There will be no demand for indistinguishably human-like machine intelligence. Perhaps this is because it would require an enormous effort (e.g. on the part of materials science) to model and reproduce our bodies physiologically (even at a cellular level), down to the exact and contingent means by which our sub-cognitive framework develops (including a huge cataloguing effort, especially for neuroanatomists); perhaps because people find the thought unsettling, or because we come to like the idea of different types of intelligence (genetic engineering having broken down a xenophobia based on a pure, prototypic notion of human nature); and mainly because we come to recognise that many of the foibles of human beings are simply contingent flaws.

b) We move away from developing machines as self-standing, external intelligences independent of humanity, and towards developing enhancements to our brains that produce a qualitative improvement in intelligence (rather than simply a laptop inside our heads), since developing machines to human-level intelligence from scratch is conceivable but impractical, without advantage, and potentially foolhardy. Why do I consider this scenario plausible? It seems likely to me that any machine that passes the Turing test will do so in large part through the knowledge and understanding of intelligence gained from studying nature's solution(s), i.e. the (human) brain. Now, it may be that replicating nature's efforts requires only a thorough understanding of nature's methods, especially those that are self-organising and robust (evolution and connectionism, and perhaps development), and so we may be able to evolve a neural network brain nestled inside a vaguely human-like robotic body to pass the Turing test without being able to fully specify (at a low level) how it, or we, do it. But I think it much more likely that inter-dependent advances in neurophysiology, computational neuroscience and AI (which will be largely a non-biologically-plausible version of computational neuroscience/physiology) will drive each other, and that our understanding of our own brain will have reached a comparable level by the time we are in a position to build something to pass the Turing test. As a result, the AI enterprise will long before have been subsumed by the goal of enhancing our own brains. Rather than building a robot brain and body from scratch, we will transform ourselves, blurring the man-machine boundary, in a fashion highly beneficial to us as a race of sentient beings.

This enhancement might come on a number of fronts, of which I'll consider perhaps the two most important. The first is basic cognitive improvement, allowing us to read faster and understand more complex issues. The second is emotional: in the same way that drugs like lithium are used to control pathologically wayward emotional mechanisms now, we might (under highly regulated conditions) be moved to increase individuals' ability to empathise with others. Perhaps this is a simplification, but it seems to me that morality can be predicated either:

weakly, upon an indoctrinated system of values, usually inculcated by parents during childhood (with rewards and punishments);

or, more strongly, upon an ability to see others as like oneself, by understanding their behaviour (even when we don't necessarily condone it) and so viewing things from their perspective. This is the understanding that I think underlies Jesus' golden rule and Kant's categorical imperative, and which can be a liberating foundation for morality, rather than the stricture it is usually painted as. It both requires and results from greater cognitive and empathetic capacities in moral agents, and I entertain the hope that the two enhancements I have considered might spontaneously give rise to a greater and richer moral sense in our species.

To return to the larger point I was trying to make: because we could enhance ourselves, the motivation for building bigger and better machines in our image would be undermined, and so the scenario Turing envisioned (and currently being pursued across the board in AI), of independent autonomous machines which we could assess with the Turing test, would simply never be realised. In this way, our successor species would be modified versions of ourselves rather than machines created by us. Going to the trouble of then building machines that could pass the Turing test would be a valueless, regressive exercise, and a monumental waste of effort.

Of course, you might think that the reasons that drive robot-building now (i.e. for the dirty, dangerous and dull tasks) might also be good reasons for building intelligent machines that we could use instead of people. But I think there would be ethical issues (i.e. machine rights), which we would come to recognise as being as inalienable as human rights, in operation for any machine capable of doing a task that really requires a human.

As a consequence, I think the most likely scenario lies somewhere in between. Perhaps the focus of our efforts will be on self-enhancement, but we will recognise that different beings are constitutionally better suited to different tasks and environments. Just as genetic engineering is likely to lead to at least some degree of sub-speciation (for better or worse[3]), so will different people's choices lead to different options and degrees of neural/machine enhancement. In a similar vein, we might build/grow entirely robotic (i.e. non-human) bodies, and use (semi-)biological material (e.g. neurons) as brains. Either way, we can expect that the human race will cease to exist as a (relatively closely-knit) genetically-defined collective. In that case, there may not be such a thing as a single, definitive Turing test, because there will not be a single, definitive humanity, any more than it would make sense now to ask, 'which language should the Turing test be taken in?'.

 

As will probably be evident from the slightly far-fetched tone of these scenarios, unlike Ray Kurzweil[4], I definitely don't think we're going to build a machine to pass the Turing test within the next 30 years. As I have said, I'm not convinced that this is how we will come to think about intelligence (i.e. that the test will be considered a valuable benchmark), that there will be a means of distinguishing between man and machine, or that it will even make sense to speak of a Turing test to pass in a few centuries' time. But we might sensibly ask instead: at what point in the future will our technology be in a position to build from scratch (i.e. without using any human, or indeed any biological, tissue) a machine that could pass for a human if it took a Turing test today? Kurzweil points to the exponential increase in the rate of progress, which we can expect to continue at least for the medium term, as a major reason for his optimism. This is a good point, especially when considered in the light of an increasing number (in absolute and proportional terms) of people being educated to beyond university level, increasing computational power at researchers' disposal, increasing absolute wealth, and so on.
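To see why compounding progress carries so much force in Kurzweil's argument, here is a toy calculation; the two-year doubling period is my own illustrative assumption, not a figure taken from Kurzweil:

```python
# Toy illustration of compounding progress. The two-year doubling
# period is an assumption for illustration, not Kurzweil's figure.

def growth_factor(years: float, doubling_years: float = 2.0) -> float:
    """Multiplicative growth after `years` of steady doubling."""
    return 2.0 ** (years / doubling_years)

for horizon in (10, 30, 50):
    print(f"after {horizon} years: x{growth_factor(horizon):,.0f}")

# after 10 years: x32
# after 30 years: x32,768
# after 50 years: x33,554,432
```

Whatever the true doubling period, the shape of the curve is the point: steady exponential growth dwarfs any linear extrapolation from present capabilities.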

However, the Turing test makes such enormous demands that I don't think we will see machines pass even a restricted version in the next 50 years. By 'restricted', I have in mind a version where conversation is confined to an (ideally) intellectual domain, like maths or philosophy, rather than a domain like films or food (where the peculiarities of our chemical and physiological make-up exclude any machine that isn't extremely similar to us in almost every respect). Even so, I think it is very important that judges be allowed to use tricks (like seeing whether the participants can identify nonsense sentences as such), and that conversations be lengthy (five minutes is still so short that clever manoeuvring on the part of the programmer can railroad the conversation or employ stock responses[5]). Alternatively, we could restrict the test by pitting machines against children, that is, requiring a winning machine to be indistinguishable to judges from real children of a given age, whose command of language and understanding of the world is more restricted and simplified than an adult's.
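To make the shape of such a restricted test concrete, here is a minimal sketch of a test-session harness; everything in it (the probe list, the `run_session` loop, the 10% trick rate) is a hypothetical illustration of the ideas above, not an established protocol:

```python
# A minimal, hypothetical sketch of a restricted Turing-test session:
# conversation is confined to one domain, the judge may inject "trick"
# probes (e.g. nonsense sentences), and the session runs long rather
# than for a five-minute exchange. All names here are illustrative
# assumptions, not an established protocol.

import random
from typing import Callable

Responder = Callable[[str], str]  # either a human relay or the machine

NONSENSE_PROBES = [
    "Colourless green ideas sleep furiously, don't you find?",
    "Is the square root of Wednesday heavier than philosophy?",
]

def run_session(respond: Responder,
                domain_questions: list[str],
                min_turns: int = 100) -> list[tuple[str, str]]:
    """Interleave domain questions with nonsense probes and log replies.

    A judge would read the transcript and decide whether the responder
    (a) stays competent within the restricted domain, and
    (b) recognises the nonsense probes as nonsense.
    """
    transcript = []
    for turn in range(min_turns):
        if random.random() < 0.1:  # occasionally slip in a trick
            question = random.choice(NONSENSE_PROBES)
        else:
            question = domain_questions[turn % len(domain_questions)]
        transcript.append((question, respond(question)))
    return transcript
```

The design choice worth noting is that the trick probes arrive unpredictably amid long stretches of ordinary domain conversation, which is precisely what makes the stock-response manoeuvring described in footnote [5] hard to sustain.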

It may be that machines will become perfectly adequate at understanding, and even responding verbally, in situations like a multi-player team Quake game, or in conversations about maths, within this century; but I doubt that any machine will be able to pass an unrestricted-conversational-domain Turing test, even against a 10-year-old child, this century.

 

 



[1] Turing, A. M. (1950), 'Computing Machinery and Intelligence', Mind 59: 433-460 (http://www.abelard.org/turpap/turpap.htm)

[2] Hofstadter, D. (1979), Gödel, Escher, Bach: An Eternal Golden Braid, Basic Books; he discusses this briefly at least two-thirds of the way through

[3] e.g. see Fukuyama, F. (2002), Our Posthuman Future: Consequences of the Biotechnology Revolution, Farrar, Straus and Giroux

[4] See http://www.longbets.org/bet/1 and Kurzweil, 'A Wager on the Turing Test: Why I Think I Will Win' (http://www.kurzweilai.net/meme/frame.html?main=/articles/art0374.html)

[5] See Shieber's 'Lessons from a restricted Turing test' (http://www.eecs.harvard.edu/shieber/papers/loebner-rev-html/loebner-rev-html.html), or Hutchens' 'How to cheat at the Turing test' (http://ciips.ee.uwa.edu.au/Papers/Technical_Reports/1997/05/Index.html)